Variance Reduced Stochastic Proximal Algorithm for AUC Maximization

Authors

Abstract

Stochastic gradient descent has been widely studied with classification accuracy as a performance measure. However, these stochastic algorithms are not applicable when non-decomposable pairwise measures are used, such as the Area under the ROC curve (AUC), a standard metric when the classes are imbalanced. Several algorithms have been proposed for optimizing the AUC metric, one of the most recent being the Stochastic Proximal AUC Maximization algorithm (SPAM). A downside of stochastic gradient descent is that it suffers from high variance, leading to very slow convergence. Variance reduced methods, which enjoy faster convergence guarantees than vanilla stochastic gradient descent, have been proposed to combat this issue; again, however, these methods cannot be directly used with non-decomposable measures. In this paper, we develop a Variance Reduced Stochastic Proximal algorithm for AUC Maximization (VRSPAM), which combines the two areas of analyzing non-decomposable metrics and variance-reduced optimization with a convergence guarantee. We perform an in-depth theoretical and empirical analysis to demonstrate that our algorithm converges faster than SPAM, the existing state-of-the-art for the AUC maximization problem.
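The variance-reduction idea the abstract refers to can be illustrated with an SVRG-style proximal update: once per epoch a full gradient is computed at a snapshot point, and each stochastic step corrects the sampled gradient with that snapshot so the estimator's variance shrinks as the iterate approaches the snapshot. The sketch below is illustrative only, not the authors' VRSPAM; the step size, the ℓ2 regularizer handled by the proximal operator, and the `grad_i` interface are assumptions for the example.

```python
import numpy as np

def prox_l2(w, lam, eta):
    # Proximal operator of eta * (lam/2) * ||w||^2: shrink toward the origin.
    return w / (1.0 + eta * lam)

def svrg_prox(grad_i, n, w0, eta=0.02, lam=0.1, epochs=50, rng=None):
    """SVRG-style variance-reduced proximal gradient (illustrative sketch).

    grad_i(w, i) returns the gradient of the i-th sample's loss at w;
    n is the number of samples, w0 the starting point.
    """
    rng = np.random.default_rng(rng)
    w_tilde = np.asarray(w0, dtype=float).copy()
    for _ in range(epochs):
        # Full gradient at the snapshot, computed once per epoch.
        mu = np.mean([grad_i(w_tilde, i) for i in range(n)], axis=0)
        w = w_tilde.copy()
        for _ in range(n):
            i = rng.integers(n)
            # Variance-reduced estimate: unbiased, and its variance
            # vanishes as w approaches the snapshot w_tilde.
            g = grad_i(w, i) - grad_i(w_tilde, i) + mu
            w = prox_l2(w - eta * g, lam, eta)
        w_tilde = w
    return w_tilde
```

On a toy ℓ2-regularized least-squares problem this recovers the closed-form solution; the same variance-reduction template is what a proximal AUC method builds on, with the pairwise surrogate in place of the per-sample loss.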


Related papers

High-Dimensional Variance-Reduced Stochastic Gradient Expectation-Maximization Algorithm

We propose a generic stochastic expectation-maximization (EM) algorithm for the estimation of high-dimensional latent variable models. At the core of our algorithm is a novel semi-stochastic variance-reduced gradient designed for the Q-function in the EM algorithm. Under a mild condition on the initialization, our algorithm is guaranteed to attain a linear convergence rate to the unknown parameter...


Stochastic Online AUC Maximization

Area under the ROC curve (AUC) is a metric which is widely used for measuring the classification performance for imbalanced data. It is of theoretical and practical interest to develop online learning algorithms that maximize AUC for large-scale data. A specific challenge in developing an online AUC maximization algorithm is that the learning objective function is usually defined over a pair of training ex...
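The pairwise structure mentioned above is what makes AUC non-decomposable: the metric and its convex surrogates are sums over positive-negative pairs, not over single examples. A minimal illustration, assuming a linear scorer and a hinge surrogate with unit margin (common choices for exposition, not the formulation of any specific paper listed here):

```python
import numpy as np

def auc(scores, labels):
    # Exact AUC: fraction of (positive, negative) pairs where the
    # positive example is scored higher; ties count as half.
    pos = scores[labels == 1][:, None]
    neg = scores[labels == 0][None, :]
    return (pos > neg).mean() + 0.5 * (pos == neg).mean()

def pairwise_hinge_loss(w, X, y, margin=1.0):
    # Convex pairwise surrogate for 1 - AUC under a linear scorer x @ w:
    # a hinge penalty whenever a negative example is scored within
    # `margin` of a positive one.
    s = X @ w
    pos = s[y == 1][:, None]
    neg = s[y == 0][None, :]
    return np.maximum(0.0, margin - (pos - neg)).mean()
```

Because every term couples a positive with a negative example, a uniformly sampled single example gives no unbiased gradient of this objective, which is why the plain SGD and variance-reduction machinery has to be adapted for AUC maximization.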


Stochastic Variance-Reduced ADMM

The alternating direction method of multipliers (ADMM) is a powerful optimization solver in machine learning. Recently, stochastic ADMM has been integrated with variance reduction methods for the stochastic gradient, leading to SAG-ADMM and SDCA-ADMM, which have fast convergence rates and low iteration complexities. However, their space requirements can still be high. In this paper, we propose an inte...


Online AUC Maximization

Most studies of online learning measure the performance of a learner by classification accuracy, which is inappropriate for applications where the data are unevenly distributed among different classes. We address this limitation by developing an online learning algorithm for maximizing the Area Under the ROC curve (AUC), a metric that is widely used for measuring the classification performance for imb...


Accelerated Variance Reduced Stochastic ADMM

Recently, many variance reduced stochastic alternating direction method of multipliers (ADMM) methods (e.g. SAG-ADMM, SDCA-ADMM and SVRG-ADMM) have made exciting progress such as linear convergence rates for strongly convex problems. However, the best known convergence rate for general convex problems is O(1/T) as opposed to O(1/T²) of accelerated batch algorithms, where T is the number of iter...



Journal

Journal: Lecture Notes in Computer Science

Year: 2021

ISSN: 1611-3349, 0302-9743

DOI: https://doi.org/10.1007/978-3-030-86523-8_12